Results 1 - 20 of 71
1.
medRxiv ; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38585929

ABSTRACT

Randomized clinical trials (RCTs) are essential to guide medical practice; however, their generalizability to a given population is often uncertain. We developed a statistically informed Generative Adversarial Network (GAN) model, RCT-Twin-GAN, that leverages relationships between covariates and outcomes and generates a digital twin of an RCT (RCT-Twin) conditioned on covariate distributions from a second patient population. We used RCT-Twin-GAN to reproduce treatment effect outcomes of the Systolic Blood Pressure Intervention Trial (SPRINT) and the Action to Control Cardiovascular Risk in Diabetes (ACCORD) Blood Pressure Trial, which tested the same intervention but had different treatment effect results. To demonstrate treatment effect estimates of each RCT conditioned on the other RCT's patient population, we evaluated the cardiovascular event-free survival of SPRINT digital twins conditioned on the ACCORD cohort and vice versa (SPRINT-conditioned ACCORD twins). The conditioned digital twins were balanced across intervention arms (mean absolute standardized mean difference (MASMD) of covariates between treatment arms 0.019 (SD 0.018)), and the covariates of the SPRINT-Twin conditioned on ACCORD were more similar to ACCORD than to SPRINT (MASMD 0.0082 (SD 0.016) vs. 0.46 (SD 0.20)). Most importantly, across iterations, the SPRINT-conditioned ACCORD-Twin datasets reproduced the overall non-significant effect size seen in ACCORD (5-year cardiovascular outcome hazard ratio (95% confidence interval) of 0.88 (0.73-1.06) in ACCORD vs. median 0.87 (0.68-1.13) in the SPRINT-conditioned ACCORD-Twin), while the ACCORD-conditioned SPRINT-Twins reproduced the significant effect size seen in SPRINT (0.75 (0.64-0.89) in SPRINT vs. median 0.79 (0.72-0.86) in the ACCORD-conditioned SPRINT-Twin). Finally, we describe the translation of this approach to real-world populations by conditioning the trials on an electronic health record population.
Therefore, RCT-Twin-GAN simulates the direct translation of RCT-derived treatment effects across various patient populations with varying covariate distributions.
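The covariate balance metric used above, the mean absolute standardized mean difference (MASMD), is simple to compute; a minimal numpy sketch (array shapes and data are illustrative, not from the trials):

```python
import numpy as np

def masmd(treated, control):
    """Mean absolute standardized mean difference across covariates.

    treated, control: 2-D arrays (patients x covariates).
    For each covariate, the SMD is the difference in arm means divided
    by the pooled standard deviation; MASMD averages |SMD| over covariates.
    """
    mean_t, mean_c = treated.mean(axis=0), control.mean(axis=0)
    var_t, var_c = treated.var(axis=0, ddof=1), control.var(axis=0, ddof=1)
    pooled_sd = np.sqrt((var_t + var_c) / 2)
    smd = (mean_t - mean_c) / pooled_sd
    return np.abs(smd).mean()

rng = np.random.default_rng(0)
balanced = masmd(rng.normal(0, 1, (500, 5)), rng.normal(0, 1, (500, 5)))
shifted = masmd(rng.normal(0, 1, (500, 5)), rng.normal(1, 1, (500, 5)))
```

Well-balanced arms score near zero, while a one-standard-deviation covariate shift scores near one, which is the scale on which the 0.019 and 0.46 figures above should be read.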

2.
medRxiv ; 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38405776

ABSTRACT

Timely and accurate assessment of electrocardiograms (ECGs) is crucial for diagnosing, triaging, and clinically managing patients. Current workflows rely on a computerized ECG interpretation using rule-based tools built into the ECG signal acquisition systems with limited accuracy and flexibility. In low-resource settings, specialists must review every single ECG for such decisions, as these computerized interpretations are not available. Additionally, high-quality interpretations are even more essential in such low-resource settings as there is a higher burden of accuracy for automated reads when access to experts is limited. Artificial Intelligence (AI)-based systems have the prospect of greater accuracy yet are frequently limited to a narrow range of conditions and do not replicate the full diagnostic range. Moreover, these models often require raw signal data, which are unavailable to physicians and necessitate costly technical integrations that are currently limited. To overcome these challenges, we developed and validated a format-independent vision encoder-decoder model, ECG-GPT, that can generate free-text, expert-level diagnosis statements directly from ECG images. The model shows robust performance, validated on 2.6 million ECGs across 6 geographically distinct health settings: (1) 2 large and diverse US health systems (Yale-New Haven and Mount Sinai Health Systems), (2) a consecutive ECG dataset from a central ECG repository from Minas Gerais, Brazil, (3) the prospective cohort study, UK Biobank, (4) a Germany-based, publicly available repository, PTB-XL, and (5) a community hospital in Missouri. The model demonstrated consistently high performance (AUROC≥0.81) across a wide range of rhythm and conduction disorders.
This can be easily accessed via a web-based application capable of receiving ECG images and represents a scalable and accessible strategy for generating accurate, expert-level reports from images of ECGs, enabling accurate triage of patients globally, especially in low-resource settings.

3.
Circ Arrhythm Electrophysiol ; 17(4): e012424, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38390713

ABSTRACT

BACKGROUND: The National Cardiovascular Data Registry Left Atrial Appendage Occlusion Registry (LAAO) includes the vast majority of transcatheter LAAO procedures performed in the United States. The objective of this study was to develop a model predicting adverse events among patients undergoing LAAO with Watchman FLX. METHODS: Data from 41 001 LAAO procedures with Watchman FLX from July 2020 to September 2021 were used to develop and validate a model predicting in-hospital major adverse events. Randomly selected development (70%, n=28 530) and validation (30%, n=12 471) cohorts were analyzed with 1000 bootstrapped samples, using forward stepwise logistic regression to create the final model. A simplified bedside risk score was also developed using this model. RESULTS: Increased age, female sex, low preprocedure hemoglobin, no prior attempt at atrial fibrillation termination, and increased fall risk most strongly predicted in-hospital major adverse events and were included in the final model along with other clinically relevant variables. The median in-hospital risk-standardized adverse event rate was 1.50% (range, 1.03%-2.84%; interquartile range, 1.42%-1.64%). The model demonstrated moderate discrimination (development C-index, 0.67 [95% CI, 0.65-0.70] and validation C-index, 0.66 [95% CI, 0.62-0.70]) with good calibration. The simplified risk score was well calibrated, with risk of in-hospital major adverse events ranging from 0.26% to 3.90% for a score of 0 to 8, respectively. CONCLUSIONS: A transcatheter LAAO risk model using National Cardiovascular Data Registry LAAO Registry data predicts in-hospital major adverse events, demonstrates consistency across hospitals, and can be used for quality improvement efforts. A simple bedside risk score was similarly predictive and may inform shared decision-making.
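Bedside scores of the kind described are commonly derived by rescaling logistic-regression coefficients to small integer points; the sketch below illustrates that generic recipe with invented coefficients and predictors, not the registry model:

```python
import math

# Hypothetical logistic-regression coefficients (log-odds per unit/category).
coefs = {"age_per_decade_over_65": 0.35, "female": 0.40,
         "low_hemoglobin": 0.55, "increased_fall_risk": 0.45}
intercept = -5.0

# Points: each coefficient divided by the smallest one, rounded to an integer.
base = min(coefs.values())
points = {k: round(v / base) for k, v in coefs.items()}

def predicted_risk(patient):
    """Risk from the full logistic model for a dict of 0/1 (or scaled) predictors."""
    logit = intercept + sum(coefs[k] * patient.get(k, 0) for k in coefs)
    return 1 / (1 + math.exp(-logit))

def score(patient):
    """Integer bedside score: sum of points for the predictors present."""
    return sum(points[k] * patient.get(k, 0) for k in coefs)

low = {"female": 0}
high = {"age_per_decade_over_65": 2, "female": 1, "low_hemoglobin": 1,
        "increased_fall_risk": 1}
```

Dividing each coefficient by the smallest and rounding yields the point values; the full model's sigmoid gives the calibrated risk, which is how a 0-8 score can map to a 0.26%-3.90% event rate.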


Subject(s)
Atrial Appendage, Atrial Fibrillation, Stroke, Humans, Female, Stroke/epidemiology, Stroke/etiology, Stroke/prevention & control, Atrial Appendage/surgery, Retrospective Studies, Atrial Fibrillation/diagnosis, Atrial Fibrillation/surgery, Risk Factors, Treatment Outcome
4.
J Am Med Inform Assoc ; 31(4): 855-865, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38269618

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) detects heart disease from images of electrocardiograms (ECGs). However, traditional supervised learning is limited by the need for large amounts of labeled data. We report the development of Biometric Contrastive Learning (BCL), a self-supervised pretraining approach for label-efficient deep learning on ECG images. MATERIALS AND METHODS: Using pairs of ECGs from 78 288 individuals from Yale (2000-2015), we trained a convolutional neural network to identify temporally separated ECG pairs that varied in layouts from the same patient. We fine-tuned BCL-pretrained models to detect atrial fibrillation (AF), gender, and LVEF < 40%, using ECGs from 2015 to 2021. We externally tested the models in cohorts from Germany and the United States. We compared BCL with ImageNet initialization and general-purpose self-supervised contrastive learning for images (simCLR). RESULTS: While with 100% labeled training data, BCL performed similarly to other approaches for detecting AF/Gender/LVEF < 40% with an AUROC of 0.98/0.90/0.90 in the held-out test sets, it consistently outperformed other methods with smaller proportions of labeled data, reaching equivalent performance at 50% of data. With 0.1% data, BCL achieved AUROC of 0.88/0.79/0.75, compared with 0.51/0.52/0.60 (ImageNet) and 0.61/0.53/0.49 (simCLR). In external validation, BCL outperformed other methods even at 100% labeled training data, with an AUROC of 0.88/0.88 for Gender and LVEF < 40% compared with 0.83/0.83 (ImageNet) and 0.84/0.83 (simCLR). DISCUSSION AND CONCLUSION: A pretraining strategy that leverages biometric signatures of different ECGs from the same patient enhances the efficiency of developing AI models for ECG images. This represents a major advance in detecting disorders from ECG images with limited labeled data.
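The pairing strategy behind BCL, treating two ECGs from the same patient as a positive pair, is typically trained with a contrastive (InfoNCE-style) objective; a minimal numpy sketch of such a loss (not the authors' implementation):

```python
import numpy as np

def info_nce(emb_a, emb_b, temperature=0.1):
    """Contrastive loss: row i of emb_a should match row i of emb_b.

    emb_a, emb_b: (batch, dim) embeddings of paired inputs, e.g. two
    temporally separated ECGs from the same patient.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # scaled cosine similarities
    # Cross-entropy with the diagonal (same-patient pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = info_nce(x, x + 0.01 * rng.normal(size=(8, 16)))   # near-identical pairs
random_pairs = info_nce(x, rng.normal(size=(8, 16)))         # unrelated pairs
```

When the pairs truly share a biometric signature the loss is near zero; shuffled pairs give a much higher loss, which is the gradient signal that teaches the encoder patient-specific features without labels.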


Subject(s)
Atrial Fibrillation, Deep Learning, Humans, Artificial Intelligence, Electrocardiography, Biometry
5.
Article in English | MEDLINE | ID: mdl-38083199

ABSTRACT

Hospitalized patients sometimes have complex health conditions, such as multiple diseases, underlying diseases, and complications. These heterogeneous patient conditions may have various representations. A generalized model ignores the differences among heterogeneous patients, and personalized models, even with transfer learning, are still limited by the small amount of training data and the repeated training process. Meta-learning provides a solution for training on similar patients based on few-shot learning; however, it cannot address common cross-domain patients. Inspired by prototypical networks [1], we propose a meta-prototype for Electronic Health Records (EHR), a meta-learning-based model with flexible prototypes representing the heterogeneity in patients. We apply this technique to cardiovascular diseases in MIMIC-III, compare it against a set of benchmark models, and demonstrate its ability to address heterogeneous patient health conditions and improve model performance by 1.2% to 11.9% on different metrics and prediction tasks. Clinical relevance: developing an adaptive EHR risk prediction model for outcomes-driven phenotyping of heterogeneous patient health conditions.
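The prototype idea referenced above is easy to state: each class (or patient group) is summarized by the mean of its support embeddings, and new cases are assigned to the nearest prototype; a generic numpy sketch:

```python
import numpy as np

def prototypes(support, labels):
    """Mean embedding per class: the 'prototype' used in few-shot learning."""
    classes = np.unique(labels)
    return classes, np.stack([support[labels == c].mean(axis=0) for c in classes])

def classify(queries, classes, protos):
    """Assign each query embedding to the class of the nearest prototype."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Two synthetic patient groups embedded around 0 and around 2.
support = np.concatenate([rng.normal(0, 0.3, (5, 4)), rng.normal(2, 0.3, (5, 4))])
labels = np.array([0] * 5 + [1] * 5)
classes, protos = prototypes(support, labels)
preds = classify(np.array([[0.1] * 4, [1.9] * 4]), classes, protos)
```

The meta-prototype model described above extends this mechanism with learned, flexible prototypes; this sketch only shows the underlying nearest-prototype classification.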


Subject(s)
Learning, Machine Learning, Humans, Electronic Health Records
6.
ACM Trans Comput Healthc ; 4(4): 1-18, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37908872

ABSTRACT

Observational medical data present unique opportunities for analysis of medical outcomes and treatment decision making. However, because these datasets do not contain the strict pairing of randomized controlled trials, matching techniques are used to draw comparisons among patients. A key limitation of such techniques is verification that the variables used to model treatment decision making are also relevant in identifying the risk of major adverse events. This article explores a deep mixture of experts approach to jointly learn how to match patients and model the risk of major adverse events in patients. Although trained with information regarding treatment and outcomes, after training, the proposed model is decomposable into a network that clusters patients into phenotypes from information available before treatment. This model is validated on a dataset of patients with acute myocardial infarction complicated by cardiogenic shock. The mixture of experts approach can predict the outcome of mortality with an area under the receiver operating characteristic curve of 0.85 ± 0.01 while jointly discovering five potential phenotypes of interest. The technique and interpretation allow for identifying clinically relevant phenotypes that may be used both for outcomes modeling and for evaluating individualized treatment effects.
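A mixture-of-experts predictor of the general kind described, a gating network that softly clusters patients plus per-cluster risk experts, can be sketched in numpy (random untrained weights, purely illustrative):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mixture_of_experts(x, gate_w, expert_ws):
    """Gate assigns each patient a distribution over experts (phenotypes);
    the prediction is the gate-weighted average of the experts' outputs."""
    gates = softmax(x @ gate_w)                              # (n, n_experts)
    preds = np.stack([1 / (1 + np.exp(-(x @ w))) for w in expert_ws], axis=1)
    return (gates * preds).sum(axis=1), gates

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 6))                 # 10 patients, 6 pre-treatment features
gate_w = rng.normal(size=(6, 5))             # 5 experts, matching the 5 phenotypes
expert_ws = [rng.normal(size=6) for _ in range(5)]
risk, gates = mixture_of_experts(x, gate_w, expert_ws)
```

After training, the gating distribution alone provides the phenotype clustering from pre-treatment information, which is the decomposability property the abstract highlights.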

7.
medRxiv ; 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37745527

ABSTRACT

Objective: Artificial intelligence (AI) detects heart disease from images of electrocardiograms (ECGs); however, traditional supervised learning is limited by the need for large amounts of labeled data. We report the development of Biometric Contrastive Learning (BCL), a self-supervised pretraining approach for label-efficient deep learning on ECG images. Materials and Methods: Using pairs of ECGs from 78,288 individuals from Yale (2000-2015), we trained a convolutional neural network to identify temporally separated ECG pairs that varied in layouts from the same patient. We fine-tuned BCL-pretrained models to detect atrial fibrillation (AF), gender, and LVEF<40%, using ECGs from 2015-2021. We externally tested the models in cohorts from Germany and the US. We compared BCL with random initialization and general-purpose self-supervised contrastive learning for images (simCLR). Results: While with 100% labeled training data, BCL performed similarly to other approaches for detecting AF/Gender/LVEF<40% with AUROC of 0.98/0.90/0.90 in the held-out test sets, it consistently outperformed other methods with smaller proportions of labeled data, reaching equivalent performance at 50% of data. With 0.1% data, BCL achieved AUROC of 0.88/0.79/0.75, compared with 0.51/0.52/0.60 (random) and 0.61/0.53/0.49 (simCLR). In external validation, BCL outperformed other methods even at 100% labeled training data, with AUROC of 0.88/0.88 for Gender and LVEF<40% compared with 0.83/0.83 (random) and 0.84/0.83 (simCLR). Discussion and Conclusion: A pretraining strategy that leverages biometric signatures of different ECGs from the same patient enhances the efficiency of developing AI models for ECG images. This represents a major advance in detecting disorders from ECG images with limited labeled data.

8.
Article in English | MEDLINE | ID: mdl-37768790

ABSTRACT

Accurate estimation of physiological biomarkers using raw waveform data from non-invasive wearable devices requires extensive data preprocessing. An automatic noise detection method for time-series data would offer significant utility for various domains. As data labeling is onerous, having a minimally supervised abnormality detection method for input data, as well as an estimate of the severity of signal corruption, is essential. We propose a model-free, time-series biomedical waveform noise detection framework using a Variational Autoencoder coupled with Gaussian Mixture Models, which can detect a range of waveform abnormalities without annotation, providing a confidence metric for each segment. Our technique operates on biomedical signals that exhibit the periodicity of heart activity. The framework can be applied to any machine learning or deep learning model as an initial signal validator component. Moreover, the confidence score generated by the proposed framework can be incorporated into different models' optimization to construct confidence-aware modeling. We conduct experiments using the dynamic time warping (DTW) distance of segments to validated cardiac cycle morphology. The results confirm that our approach removes noisy cardiac cycles, and the remaining signals, classified as clean, exhibit a 59.92% reduction in the standard deviation of DTW distances. Using a dataset of bio-impedance data comprising 97,885 cardiac cycles, we further demonstrate a significant improvement in the downstream task of cuffless blood pressure estimation, with an average reduction of 2.67 mmHg in root mean square error (RMSE) for diastolic blood pressure and 2.13 mmHg in RMSE for systolic blood pressure, increases in average Pearson correlation of 0.28 and 0.08, respectively, and a statistically significant improvement in signal-to-noise ratio in the presence of different synthetic noise sources.
This enables burden-free validation of wearable sensor data for downstream biomedical applications.
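The DTW distance used to compare segments against validated cardiac-cycle morphology follows a classic dynamic-programming recurrence; a minimal pure-Python sketch with toy sequences:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    cost[i][j] = |a[i] - b[j]| plus the cheapest of the three moves
    (match, insert, delete), so sequences that differ mainly in timing
    score close while morphologically different ones score far apart.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

template = [0, 1, 3, 1, 0, 0]           # toy "clean cycle" shape
time_warped = [0, 1, 1, 3, 3, 1, 0, 0]  # same shape, stretched in time
noisy = [0, 3, 0, 3, 0, 3]              # different morphology
```

Sequences that differ only in timing (the stretched copy) score zero, while a morphologically different segment scores high, which is what makes DTW a reasonable check of cycle shape.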

9.
Eur Heart J ; 44(43): 4592-4604, 2023 11 14.
Article in English | MEDLINE | ID: mdl-37611002

ABSTRACT

BACKGROUND AND AIMS: Early diagnosis of aortic stenosis (AS) is critical to prevent morbidity and mortality but requires skilled examination with Doppler imaging. This study reports the development and validation of a novel deep learning model that relies on two-dimensional (2D) parasternal long axis videos from transthoracic echocardiography without Doppler imaging to identify severe AS, suitable for point-of-care ultrasonography. METHODS AND RESULTS: In a training set of 5257 studies (17 570 videos) from 2016 to 2020 [Yale-New Haven Hospital (YNHH), Connecticut], an ensemble of three-dimensional convolutional neural networks was developed to detect severe AS, leveraging self-supervised contrastive pretraining for label-efficient model development. This deep learning model was validated in a temporally distinct set of 2040 consecutive studies from 2021 from YNHH as well as two geographically distinct cohorts of 4226 and 3072 studies, from California and other hospitals in New England, respectively. The deep learning model achieved an area under the receiver operating characteristic curve (AUROC) of 0.978 (95% CI: 0.966, 0.988) for detecting severe AS in the temporally distinct test set, maintaining its diagnostic performance in geographically distinct cohorts [0.952 AUROC (95% CI: 0.941, 0.963) in California and 0.942 AUROC (95% CI: 0.909, 0.966) in New England]. The model was interpretable with saliency maps identifying the aortic valve, mitral annulus, and left atrium as the predictive regions. Among non-severe AS cases, predicted probabilities were associated with worse quantitative metrics of AS suggesting an association with various stages of AS severity. CONCLUSION: This study developed and externally validated an automated approach for severe AS detection using single-view 2D echocardiography, with potential utility for point-of-care screening.


Subject(s)
Aortic Valve Stenosis, Deep Learning, Humans, Echocardiography, Aortic Valve Stenosis/diagnostic imaging, Aortic Valve Stenosis/complications, Aortic Valve/diagnostic imaging, Ultrasonography
10.
NPJ Digit Med ; 6(1): 124, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37433874

ABSTRACT

Artificial intelligence (AI) can detect left ventricular systolic dysfunction (LVSD) from electrocardiograms (ECGs). Wearable devices could allow for broad AI-based screening but frequently obtain noisy ECGs. We report a novel strategy that automates the detection of hidden cardiovascular diseases, such as LVSD, adapted for noisy single-lead ECGs obtained on wearable and portable devices. We use 385,601 ECGs for development of standard and noise-adapted models. For the noise-adapted model, ECGs are augmented during training with random Gaussian noise within four distinct frequency ranges, each emulating real-world noise sources. Both models perform comparably on standard ECGs with an AUROC of 0.90. The noise-adapted model performs significantly better on the same test set augmented with four distinct real-world noise recordings at multiple signal-to-noise ratios (SNRs), including noise isolated from a portable device ECG. The standard and noise-adapted models have AUROCs of 0.72 and 0.87, respectively, when evaluated on ECGs augmented with portable ECG device noise at an SNR of 0.5. This approach represents a novel strategy for the development of wearable-adapted tools from clinical ECG repositories.
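The augmentation step, adding Gaussian noise restricted to a frequency band at a controlled SNR, can be sketched with numpy's FFT (a simplified illustration; the band, sampling rate, and signal here are stand-ins, not the paper's pipeline):

```python
import numpy as np

def add_band_limited_noise(signal, fs, band, snr):
    """Add Gaussian noise restricted to a frequency band at a target SNR.

    snr is the ratio of signal power to noise power (linear, not dB).
    """
    rng = np.random.default_rng(42)
    noise = rng.normal(size=signal.shape)
    # Zero out spectral components outside the band.
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spec = np.fft.rfft(noise)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0
    noise = np.fft.irfft(spec, n=len(signal))
    # Scale noise power so that signal_power / noise_power == snr.
    scale = np.sqrt(signal.var() / (snr * noise.var()))
    return signal + scale * noise

fs = 500                                  # Hz, a common ECG sampling rate
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)    # stand-in periodic "signal"
noisy = add_band_limited_noise(ecg_like, fs, band=(50, 100), snr=0.5)
```

An SNR of 0.5 means the injected noise carries twice the power of the signal, matching the harshest evaluation condition quoted above.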

11.
Circulation ; 148(9): 765-777, 2023 08 29.
Article in English | MEDLINE | ID: mdl-37489538

ABSTRACT

BACKGROUND: Left ventricular (LV) systolic dysfunction is associated with a >8-fold increased risk of heart failure and a 2-fold risk of premature death. The use of ECG signals in screening for LV systolic dysfunction is limited by their availability to clinicians. We developed a novel deep learning-based approach that can use ECG images for the screening of LV systolic dysfunction. METHODS: Using 12-lead ECGs plotted in multiple different formats, and corresponding echocardiographic data recorded within 15 days from the Yale New Haven Hospital between 2015 and 2021, we developed a convolutional neural network algorithm to detect an LV ejection fraction <40%. The model was validated within clinical settings at Yale New Haven Hospital and externally on ECG images from Cedars Sinai Medical Center in Los Angeles, CA; Lake Regional Hospital in Osage Beach, MO; Memorial Hermann Southeast Hospital in Houston, TX; and Methodist Cardiology Clinic of San Antonio, TX. In addition, it was validated in the prospective Brazilian Longitudinal Study of Adult Health. Gradient-weighted class activation mapping was used to localize class-discriminating signals on ECG images. RESULTS: Overall, 385 601 ECGs with paired echocardiograms were used for model development. The model demonstrated high discrimination across various ECG image formats and calibrations in internal validation (area under the receiver operating characteristic curve [AUROC], 0.91; area under the precision-recall curve [AUPRC], 0.55); and external sets of ECG images from Cedars Sinai (AUROC, 0.90 and AUPRC, 0.53), outpatient Yale New Haven Hospital clinics (AUROC, 0.94 and AUPRC, 0.77), Lake Regional Hospital (AUROC, 0.90 and AUPRC, 0.88), Memorial Hermann Southeast Hospital (AUROC, 0.91 and AUPRC, 0.88), Methodist Cardiology Clinic (AUROC, 0.90 and AUPRC, 0.74), and Brazilian Longitudinal Study of Adult Health cohort (AUROC, 0.95 and AUPRC, 0.45).
An ECG suggestive of LV systolic dysfunction portended >27-fold higher odds of LV systolic dysfunction on transthoracic echocardiogram (odds ratio, 27.5 [95% CI, 22.3-33.9] in the held-out set). Class-discriminative patterns localized to the anterior and anteroseptal leads (V2 and V3), corresponding to the left ventricle regardless of the ECG layout. A positive ECG screen in individuals with an LV ejection fraction ≥40% at the time of initial assessment was associated with a 3.9-fold increased risk of developing incident LV systolic dysfunction in the future (hazard ratio, 3.9 [95% CI, 3.3-4.7]; median follow-up, 3.2 years). CONCLUSIONS: We developed and externally validated a deep learning model that identifies LV systolic dysfunction from ECG images. This approach represents an automated and accessible screening strategy for LV systolic dysfunction, particularly in low-resource settings.


Subject(s)
Electrocardiography, Ventricular Dysfunction, Left, Adult, Humans, Prospective Studies, Longitudinal Studies, Ventricular Dysfunction, Left/diagnostic imaging, Ventricular Function, Left/physiology
12.
IEEE J Biomed Health Inform ; 27(9): 4273-4284, 2023 09.
Article in English | MEDLINE | ID: mdl-37363851

ABSTRACT

We propose our Confidence-Aware Particle Filter (CAPF) framework, which analyzes a series of estimated changes in blood pressure (BP) to provide several true-state hypotheses for a given instance. In particular, our novel confidence-awareness mechanism assigns likelihood scores to each hypothesis in an effort to discard potentially erroneous measurements, based on the agreement among a series of estimated changes and the physiological plausibility when considering DBP/SBP pairs. The particle filter formulation (or sequential Monte Carlo method) can jointly consider the hypotheses and their probabilities over time to provide a stable trend of estimated BP measurements. In this study, we evaluate BP trend estimation from an emerging bio-impedance (Bio-Z) prototype wearable modality, although the framework is applicable to all types of physiological modalities. Each subject in the evaluation cohort underwent a hand-gripper exercise, a cold pressor test, and a recovery state to increase the variation in the captured BP ranges. Experiments show that CAPF yields superior continuous pulse pressure (PP), diastolic blood pressure (DBP), and systolic blood pressure (SBP) estimation performance compared to ten baseline approaches. Furthermore, CAPF performs on track to comply with AAMI and BHS standards for achieving a performance classification of Grade A, with mean error accuracies of -0.16 ± 3.75 mmHg for PP (r = 0.81), 0.42 ± 4.39 mmHg for DBP (r = 0.92), and -0.09 ± 6.51 mmHg for SBP (r = 0.92) from more than 3500 test data points.
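The sequential Monte Carlo machinery underlying CAPF, propagate hypotheses, weight by likelihood, resample, is generic; below is a minimal bootstrap particle filter smoothing a noisy, drifting measurement series (illustrative only; it omits the paper's confidence-awareness mechanism):

```python
import numpy as np

def particle_filter(observations, n_particles=500, process_sd=1.0, obs_sd=4.0):
    """Bootstrap particle filter: estimate a latent trend (e.g. BP) from
    noisy measurements by maintaining weighted state hypotheses."""
    rng = np.random.default_rng(1)
    particles = rng.normal(observations[0], obs_sd, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0, process_sd, n_particles)    # propagate
        weights = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)
        weights /= weights.sum()                               # likelihood weighting
        estimates.append(np.sum(weights * particles))          # posterior mean
        idx = rng.choice(n_particles, n_particles, p=weights)  # resample
        particles = particles[idx]
    return np.array(estimates)

rng = np.random.default_rng(0)
true_bp = 80 + np.cumsum(rng.normal(0, 0.5, 60))   # slowly drifting DBP-like trend
measured = true_bp + rng.normal(0, 5, 60)          # noisy per-beat estimates
filtered = particle_filter(measured)
```

The filtered trend tracks the drifting latent value far more closely than the raw measurements, which is the stabilizing behavior CAPF builds on before adding its likelihood-based rejection of implausible DBP/SBP hypotheses.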


Subject(s)
Blood Pressure Determination, Hypertension, Humans, Blood Pressure/physiology, Blood Pressure Determination/methods
13.
PLOS Digit Health ; 2(6): e0000267, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37310958

ABSTRACT

The identification of nocturnal nondipping blood pressure (< 10% drop in mean systolic blood pressure from awake to sleep periods), as captured by ambulatory blood pressure monitoring, is a valuable element of risk prediction for cardiovascular disease, independent of daytime or clinic blood pressure measurements. However, capturing measurements, including determination of wake/sleep periods, is challenging. Accordingly, we sought to evaluate the impact of different definitions and algorithms for defining sleep onset on the classification of nocturnal nondipping. Using approaches based upon participant self-reports, an applied definition of a common sleep period (12 AM to 6 AM), manual actigraphy, and automated actigraphy, we identified changes to the classification of nocturnal nondipping, and we conducted a secondary analysis of the potential impact of an ambulatory blood pressure monitor on sleep. Among 61 participants in the Eastern Caribbean Health Outcomes Research Network hypertension study with complete ambulatory blood pressure monitor and sleep data, the concordance for nocturnal nondipping across methods was 0.54 by Fleiss' kappa (depending on the method, 36 to 51 participants were classified as having nocturnal nondipping). Sleep quality for participants with dipping versus nondipping differed significantly in total sleep length when wearing the ambulatory blood pressure monitor (shorter sleep duration) versus not (longer sleep duration), although there were no differences in sleep efficiency or disturbances. These findings indicate that consideration of sleep time measurements is critical for interpreting ambulatory blood pressure. As technology advances to detect blood pressure and sleep patterns, further investigation is needed to determine which method should be used for diagnosis, treatment, and future cardiovascular risk.
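The nondipping definition above is easy to operationalize once awake and sleep windows are fixed, which also makes clear why the choice of sleep-onset algorithm can flip a classification; a small sketch with hypothetical readings:

```python
def dip_percent(awake_sbp, sleep_sbp):
    """Percent drop in mean systolic BP from awake to sleep periods."""
    mean_awake = sum(awake_sbp) / len(awake_sbp)
    mean_sleep = sum(sleep_sbp) / len(sleep_sbp)
    return 100 * (mean_awake - mean_sleep) / mean_awake

def is_nondipper(awake_sbp, sleep_sbp, threshold=10.0):
    """Nondipping: less than a 10% awake-to-sleep drop in mean SBP."""
    return dip_percent(awake_sbp, sleep_sbp) < threshold

# Hypothetical SBP readings (mm Hg) under two sleep-window assignments.
awake = [135, 128, 140, 132, 138]     # mean 134.6
dipper_sleep = [115, 112, 118]        # mean 115.0 -> ~14.6% dip
nondipper_sleep = [130, 127, 129]     # mean 128.7 -> ~4.4% dip
```

Moving even a few readings between the awake and sleep windows shifts either mean, and a dip near the 10% boundary can cross it, which is exactly the classification instability the study quantifies across sleep-onset definitions.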

14.
JMIR Form Res ; 7: e46659, 2023 May 16.
Article in English | MEDLINE | ID: mdl-37191989

ABSTRACT

BACKGROUND: Effective monitoring of dietary habits is critical for promoting healthy lifestyles and preventing or delaying the onset and progression of diet-related diseases, such as type 2 diabetes. Recent advances in speech recognition technologies and natural language processing present new possibilities for automated diet capture; however, further exploration is necessary to assess the usability and acceptability of such technologies for diet logging. OBJECTIVE: This study explores the usability and acceptability of speech recognition technologies and natural language processing for automated diet logging. METHODS: We designed and developed base2Diet, an iOS smartphone application that prompts users to log their food intake using voice or text. To compare the effectiveness of the 2 diet logging modes, we conducted a 28-day pilot study with 2 arms and 2 phases. A total of 18 participants were included in the study, with 9 participants in each arm (text: n=9, voice: n=9). During phase I of the study, all 18 participants received reminders for breakfast, lunch, and dinner at preselected times. At the beginning of phase II, all participants were given the option to choose 3 times during the day to receive 3 times daily reminders to log their food intake for the remainder of the phase, with the ability to modify the selected times at any point before the end of the study. RESULTS: The total number of distinct diet logging events per participant was 1.7 times higher in the voice arm than in the text arm (P=.03, unpaired t test). Similarly, the total number of active days per participant was 1.5 times higher in the voice arm than in the text arm (P=.04, unpaired t test). Furthermore, the text arm had a higher attrition rate than the voice arm, with only 1 participant dropping out of the study in the voice arm, while 5 participants dropped out in the text arm.
CONCLUSIONS: The results of this pilot study demonstrate the potential of voice technologies in automated diet capturing using smartphones. Our findings suggest that voice-based diet logging is more effective and better received by users compared to traditional text-based methods, underscoring the need for further research in this area. These insights carry significant implications for the development of more effective and accessible tools for monitoring dietary habits and promoting healthy lifestyle choices.

16.
J Am Med Inform Assoc ; 30(5): 943-952, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36905605

ABSTRACT

OBJECTIVE: Nonexercise algorithms are cost-effective methods to estimate cardiorespiratory fitness (CRF), but the existing models have limitations in generalizability and predictive power. This study aims to improve the nonexercise algorithms using machine learning (ML) methods and data from US national population surveys. MATERIALS AND METHODS: We used the 1999-2004 data from the National Health and Nutrition Examination Survey (NHANES). Maximal oxygen uptake (VO2 max), measured through a submaximal exercise test, served as the gold standard measure for CRF in this study. We applied multiple ML algorithms to build 2 models: a parsimonious model using commonly available interview and examination data, and an extended model additionally incorporating variables from Dual-Energy X-ray Absorptiometry (DEXA) and standard laboratory tests in clinical practice. Key predictors were identified using Shapley additive explanation (SHAP). RESULTS: Among the 5668 NHANES participants in the study population, 49.9% were women and the mean (SD) age was 32.5 years (10.0). The light gradient boosting machine (LightGBM) had the best performance across multiple types of supervised ML algorithms. Compared with the best existing nonexercise algorithms that could be applied to the NHANES, the parsimonious LightGBM model (RMSE: 8.51 ml/kg/min [95% CI: 7.73-9.33]) and the extended LightGBM model (RMSE: 8.26 ml/kg/min [95% CI: 7.44-9.09]) significantly reduced the error by 15% and 12% (P < .001 for both), respectively. DISCUSSION: The integration of ML and national data sources presents a novel approach for estimating cardiorespiratory fitness. This method provides valuable insights for cardiovascular disease risk classification and clinical decision-making, ultimately leading to improved health outcomes. CONCLUSION: Our nonexercise models provide improved accuracy in estimating VO2 max within NHANES data as compared to existing nonexercise algorithms.
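RMSE point estimates with 95% CIs of the kind reported are commonly obtained by a percentile bootstrap over the test set; a generic sketch with simulated VO2 max data (the values are synthetic, not NHANES):

```python
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def bootstrap_rmse_ci(y_true, y_pred, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for RMSE: resample (truth, prediction) pairs
    with replacement and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [rmse(y_true[idx], y_pred[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return rmse(y_true, y_pred), (float(np.quantile(stats, alpha / 2)),
                                  float(np.quantile(stats, 1 - alpha / 2)))

rng = np.random.default_rng(0)
vo2_true = rng.normal(40, 8, 300)               # hypothetical VO2 max (ml/kg/min)
vo2_pred = vo2_true + rng.normal(0, 8.5, 300)   # model with ~8.5 ml/kg/min error
point, (lo, hi) = bootstrap_rmse_ci(vo2_true, vo2_pred)
```

The interval widths quoted above (e.g. 7.73-9.33 around 8.51) are the kind of spread this resampling procedure produces for a test set of moderate size.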


Subject(s)
Exercise Test, Exercise, Adult, Female, Humans, Male, Exercise Test/methods, Machine Learning, Nutrition Surveys, Oxygen, Young Adult
17.
Circ Cardiovasc Qual Outcomes ; 16(4): e009258, 2023 04.
Article in English | MEDLINE | ID: mdl-36883456

ABSTRACT

BACKGROUND: Visit-to-visit variability (VVV) in blood pressure values has been reported in clinical studies. However, little is known about VVV in clinical practice and whether it is associated with patient characteristics in real-world settings. METHODS: We conducted a retrospective cohort study to quantify VVV in systolic blood pressure (SBP) values in a real-world setting. We included adults (age ≥18 years) with at least 2 outpatient visits between January 1, 2014 and October 31, 2018 from the Yale New Haven Health System. Patient-level measures of VVV included the SD and coefficient of variation of a given patient's SBP across visits. We calculated patient-level VVV overall and by patient subgroups. We further developed a multilevel regression model to assess the extent to which VVV in SBP was explained by patient characteristics. RESULTS: The study population included 537 218 adults, with a total of 7 721 864 SBP measurements. The mean age was 53.4 (SD 19.0) years, 60.4% were women, 69.4% were non-Hispanic White, and 18.1% were on antihypertensive medications. Patients had a mean body mass index of 28.4 (5.9) kg/m2, and 22.6%, 8.0%, 9.7%, and 5.6% had a history of hypertension, diabetes, hyperlipidemia, and coronary artery disease, respectively. The mean number of visits per patient was 13.3, over an average period of 2.4 years. The mean (SD) intraindividual SD and coefficient of variation of SBP across visits were 10.6 (5.1) mm Hg and 0.08 (0.04). These measures of blood pressure variation were consistent across patient subgroups defined by demographic characteristics and medical history. In the multivariable linear regression model, only 4% of the variance in absolute standardized difference was attributable to patient characteristics. CONCLUSIONS: The VVV observed in real-world practice poses challenges for managing patients with hypertension on the basis of outpatient blood pressure readings and suggests the need to go beyond episodic clinic evaluation.
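The patient-level variability measures are direct to compute; a sketch with hypothetical visit-level readings:

```python
import numpy as np

def visit_variability(sbp_by_patient):
    """Per-patient SD and coefficient of variation (SD / mean) of SBP."""
    out = {}
    for patient, sbps in sbp_by_patient.items():
        sbps = np.asarray(sbps, dtype=float)
        sd = sbps.std(ddof=1)               # intraindividual SD across visits
        out[patient] = {"sd": sd, "cv": sd / sbps.mean()}
    return out

# Hypothetical visit-level SBP readings (mm Hg) for two patients.
readings = {"A": [128, 142, 120, 135, 131], "B": [150, 149, 151, 150]}
vvv = visit_variability(readings)
```

Patient A's readings vary by roughly the 10.6 mm Hg intraindividual SD reported above, while patient B is unusually stable; the CV simply normalizes that spread by the patient's mean SBP.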


Subject(s)
Hypertension , Adult , Humans , Female , Middle Aged , Adolescent , Male , Blood Pressure , Retrospective Studies , Risk Factors , Hypertension/diagnosis , Hypertension/drug therapy , Blood Pressure Determination
18.
JAMA ; 329(3): 255-257, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36648476

ABSTRACT

This study describes the degree to which blood draws occurred among hospitalized patients during traditional sleep hours and investigates trends over time.


Subject(s)
Academic Medical Centers , Phlebotomy , Humans , Hospitalization , Time Factors
19.
J Diabetes Sci Technol ; 17(1): 217-223, 2023 01.
Article in English | MEDLINE | ID: mdl-34467803

ABSTRACT

This article provides an up-to-date review of technological advances in 3 key areas related to diet monitoring and precision nutrition. First, we review developments in mobile applications, with a focus on food photography and artificial intelligence to facilitate the process of diet monitoring. Second, we review advances in 2 types of wearable and handheld sensors that can potentially be used to fully automate certain aspects of diet logging: physical sensors to detect moments of dietary intake, and chemical sensors to estimate the composition of diets and meals. Finally, we review new programs that can generate personalized/precision nutrition recommendations based on measurements of gut microbiota and continuous glucose monitors with artificial intelligence. The article concludes with a discussion of potential pitfalls of some of these technologies.


Subject(s)
Artificial Intelligence , Mobile Applications , Humans , Diet , Nutritional Status , Eating
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2988-2992, 2022 07.
Article in English | MEDLINE | ID: mdl-36086068

ABSTRACT

Understanding how macronutrients (e.g., carbohydrates, protein, fat) affect blood glucose is of broad interest in health and dietary research. The general effects are well known; for example, adding protein and fat to a carbohydrate-based meal tends to reduce the blood glucose response. However, there are large individual differences in food metabolism, such that the same meal can lead to different glucose responses across individuals. To address this problem, we present a technique that can simultaneously (1) model macronutrients' effects on glucose levels over time and (2) capture inter-individual differences in sensitivity to macronutrients. The model assumes that each macronutrient contributes a basis function to the overall glucose response, with individual-specific weights capturing differences in macronutrient metabolism. The technique performs a linear decomposition of glucose responses, alternating between estimating the macronutrients' effects over time and estimating each individual's sensitivity to them. On an experimental dataset containing glucose responses to a variety of mixed meals, the technique extracts basis functions for the macronutrients that are consistent with their hypothesized effects on postprandial glucose responses (PPGRs), and also characterizes how macronutrients affect individuals differently.
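The alternating decomposition described in this abstract can be sketched as an alternating-least-squares procedure. The sketch below uses synthetic data, and the specific model form (shared per-macronutrient response curves over time, weighted by meal macronutrient amounts and per-individual sensitivities) is one plausible reading of the abstract, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: n_meals meals of M macronutrients eaten by P individuals,
# with glucose responses sampled at T time points after each meal.
n_meals, M, T, P = 60, 3, 20, 5
person = rng.integers(0, P, size=n_meals)     # which individual ate each meal
A = rng.uniform(0.0, 1.0, size=(n_meals, M))  # macronutrient amounts per meal

# Ground truth: per-macronutrient response curves and per-person sensitivities
t = np.linspace(0, 1, T)
F_true = np.stack([np.exp(-((t - c) ** 2) / 0.02) for c in (0.2, 0.4, 0.6)])
S_true = rng.uniform(0.5, 1.5, size=(P, M))
R = (A * S_true[person]) @ F_true             # noiseless responses, (n_meals, T)

# Alternating least squares: fit response curves F and sensitivities S
S = np.ones((P, M))
residuals = []
for _ in range(30):
    # (1) with sensitivities fixed, solve for the response curves over time
    D = A * S[person]                         # effective design, (n_meals, M)
    F, *_ = np.linalg.lstsq(D, R, rcond=None)
    # (2) with curves fixed, solve each individual's sensitivities
    for p in range(P):
        idx = person == p
        # rows index this person's (meal, time) pairs, columns macronutrients
        X = (A[idx][:, :, None] * F[None]).transpose(0, 2, 1).reshape(-1, M)
        S[p], *_ = np.linalg.lstsq(X, R[idx].reshape(-1), rcond=None)
    fit = (A * S[person]) @ F
    residuals.append(np.linalg.norm(fit - R) / np.linalg.norm(R))
```

Because each substep exactly minimizes the same squared error over one block of variables, the relative residual recorded after each iteration is non-increasing; note that the decomposition is only identified up to a joint rescaling of curves and sensitivities.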


Subject(s)
Blood Glucose , Individuality , Blood Glucose/metabolism , Glucose , Humans , Least-Squares Analysis , Nutrients